#15 FLAN: Finetuned Language Models are Zero-Shot Learners (Google Research’s Paper Explained!)

Date:

This paper introduces instruction tuning, the method used to train FLAN (Finetuned LAnguage Net). In a zero-shot setting, FLAN beats GPT-3's zero-shot performance on 19 of 25 tasks and even outperforms few-shot GPT-3 on 10 of them!

Link for the video